Although deep generative models have succeeded in image processing, natural language processing, and reinforcement learning, training that involves discrete random variables remains challenging due to the high variance of its gradient estimation process. Monte Carlo sampling is the common ingredient in most variance reduction approaches; however, it involves time-consuming resampling and multiple function evaluations. We propose a Gapped Straight-Through (GST) estimator that reduces variance without incurring resampling overhead. The estimator is inspired by the essential properties of the Straight-Through Gumbel-Softmax estimator; we identify these properties and show through ablation studies that they are essential. Experiments demonstrate that the proposed GST estimator outperforms strong baselines on two discrete deep generative modeling tasks, MNIST-VAE and ListOps.
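To make concrete the estimator family this work builds on, here is a minimal PyTorch sketch of the standard Straight-Through Gumbel-Softmax estimator (the baseline that inspires GST, not the proposed GST itself); the temperature value and the toy usage at the end are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0):
    """Straight-Through Gumbel-Softmax: a discrete one-hot sample in the
    forward pass, the relaxed (soft) sample's gradient in the backward pass."""
    # Sample Gumbel(0, 1) noise and form the relaxed (soft) sample.
    gumbels = -torch.empty_like(logits).exponential_().log()
    y_soft = F.softmax((logits + gumbels) / tau, dim=-1)

    # Hard one-hot sample derived from the relaxed sample.
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)

    # Straight-through trick: forward value is y_hard, gradient flows via y_soft.
    return (y_hard - y_soft).detach() + y_soft


logits = torch.randn(4, 10, requires_grad=True)
sample = st_gumbel_softmax(logits, tau=0.5)  # one-hot samples, differentiable
sample.sum().backward()                      # gradients reach `logits`
```

PyTorch also ships an equivalent routine as torch.nn.functional.gumbel_softmax(logits, tau, hard=True).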
Safety-critical applications require controllers/policies that can guarantee safety with high confidence. Control barrier functions are a useful tool for guaranteeing safety when we have access to the ground-truth system dynamics. In practice, our knowledge of the system dynamics is inaccurate, and the resulting residual dynamics can lead to unsafe behavior. Learning the residual dynamics with a deterministic machine learning model can prevent unsafe behavior, but it may fail when the predictions are imperfect. In that setting, a probabilistic learning method that reasons about the uncertainty of its predictions can help provide robust safety margins. In this work, we use a Gaussian process to model the projection of the residual dynamics onto a control barrier function. We propose a novel optimization procedure to generate safe controls that guarantee safety with high probability. The safety filter is equipped with the ability to reason about the uncertainty of the GP's predictions. We demonstrate the efficacy of this approach through experiments on Segway and quadrotor simulations. Our proposed probabilistic approach significantly reduces the number of safety violations compared to a deterministic approach with a neural network.
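The abstract does not spell out the optimization procedure, so the following is only a schematic sketch of a CBF safety filter whose constraint is tightened by a GP's predictive uncertainty; the control-affine constraint form, the kappa-scaled standard-deviation margin, and the single-constraint closed-form projection are assumptions made for illustration.

```python
import numpy as np

def gp_cbf_safety_filter(u_nom, h, Lf_h, Lg_h, mu_res, sigma_res,
                         alpha=1.0, kappa=2.0):
    """Project a nominal control onto a CBF constraint tightened by GP
    uncertainty (single affine constraint, closed-form projection).

    Illustrative constraint:
        Lf_h + Lg_h @ u + mu_res + alpha * h - kappa * sigma_res >= 0
    where mu_res and sigma_res are the GP's predictive mean / std of the
    residual dynamics projected onto the barrier function h.
    """
    a = np.asarray(Lg_h, dtype=float)            # constraint normal w.r.t. u
    b = Lf_h + mu_res + alpha * h - kappa * sigma_res
    slack = a @ u_nom + b
    if slack >= 0.0:                             # nominal control already safe
        return u_nom
    # Minimum-norm correction: project u_nom onto the constraint boundary.
    return u_nom - (slack / (a @ a)) * a

u_safe = gp_cbf_safety_filter(u_nom=np.array([0.5]), h=0.1, Lf_h=-0.2,
                              Lg_h=np.array([1.0]), mu_res=0.0, sigma_res=0.05)
```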
Since most industrial control applications use PID controllers, PID tuning and anti-windup measures are significant problems. This paper tunes the feedback gains of a PID controller via back-calculation and automatic differentiation tools. In particular, we unroll the cost function to generate gradients and perform gradient descent to improve controller performance. We provide a theoretical framework for analyzing this non-convex optimization and establish a relationship between back-calculation and disturbance feedback policies. We include numerical experiments on linear systems with actuator saturation to show the efficacy of this approach.
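As a sketch of the general recipe described here (differentiating through an unrolled closed-loop cost to tune PID gains), the snippet below uses PyTorch autograd on a toy first-order plant with actuator saturation and a simple back-calculation anti-windup term; the plant, cost, step sizes, and the back-calculation gain `kb` are illustrative assumptions, not the paper's exact setup.

```python
import torch

def rollout_cost(gains, T=200, dt=0.05, u_max=1.0, kb=1.0, ref=1.0):
    """Simulate a saturated PID loop on a toy first-order plant and return
    a quadratic tracking cost that is differentiable in the PID gains."""
    kp, ki, kd = gains
    x = torch.tensor(0.0)
    integ = torch.tensor(0.0)
    prev_e = torch.tensor(ref)                   # initial error (x starts at 0)
    cost = torch.tensor(0.0)
    for _ in range(T):
        e = ref - x
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        u_sat = torch.clamp(u, -u_max, u_max)    # actuator saturation
        # Back-calculation anti-windup: bleed the integrator when saturated.
        integ = integ + dt * (e + kb * (u_sat - u))
        x = x + dt * (-x + u_sat)                # toy stable plant x' = -x + u
        cost = cost + dt * e ** 2
        prev_e = e
    return cost

gains = torch.tensor([1.0, 0.5, 0.1], requires_grad=True)  # kp, ki, kd
opt = torch.optim.Adam([gains], lr=0.05)
for step in range(100):
    opt.zero_grad()
    loss = rollout_cost(gains)
    loss.backward()        # gradients of the unrolled cost w.r.t. the gains
    opt.step()
```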
This paper presents a novel approach to the acquisition of language models from corpora. The framework builds on Cobweb, an early system for constructing taxonomic hierarchies of probabilistic concepts that used a tabular, attribute-value encoding of training cases and concepts, making it unsuitable for sequential input like language. In response, we explore three new extensions to Cobweb -- the Word, Leaf, and Path variants. These systems encode each training case as an anchor word and surrounding context words, and they store probabilistic descriptions of concepts as distributions over anchor and context information. As in the original Cobweb, a performance element sorts a new instance downward through the hierarchy and uses the final node to predict missing features. Learning is interleaved with performance, updating concept probabilities and hierarchy structure as classification occurs. Thus, the new approaches process training cases in an incremental, online manner that is very different from most methods for statistical language learning. We examine how well the three variants place synonyms together and keep homonyms apart, their ability to recall synonyms as a function of training set size, and their training efficiency. Finally, we discuss related work on incremental learning and directions for further research.
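A minimal sketch of the anchor/context instance encoding described above, together with a count-based probabilistic concept node; the window size and the omission of Cobweb's hierarchy search and category-utility measure are simplifications for illustration.

```python
from collections import Counter

def make_instances(tokens, window=2):
    """Encode each position as an anchor word plus a bag of context words,
    mirroring the anchor/context encoding of the Cobweb variants.
    The window size is an illustrative assumption."""
    instances = []
    for i, anchor in enumerate(tokens):
        left = tokens[max(0, i - window):i]
        right = tokens[i + 1:i + 1 + window]
        instances.append({"anchor": anchor, "context": Counter(left + right)})
    return instances

class ConceptNode:
    """A probabilistic concept: counts over anchor and context words,
    normalized on demand into distributions (hierarchy search omitted)."""
    def __init__(self):
        self.n = 0
        self.anchor_counts = Counter()
        self.context_counts = Counter()

    def update(self, instance):
        self.n += 1
        self.anchor_counts[instance["anchor"]] += 1
        self.context_counts.update(instance["context"])

    def predict_anchor(self):
        total = sum(self.anchor_counts.values())
        return {w: c / total for w, c in self.anchor_counts.items()}
```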
The evaluation of abstractive summarization models typically uses test data that is identically distributed as training data. In real-world practice, documents to be summarized may contain input noise caused by text extraction artifacts or data pipeline bugs. The robustness of model performance under distribution shift caused by such noise is relatively under-studied. We present a large empirical study quantifying the sometimes severe loss in performance (up to 12 ROUGE-1 points) from different types of input noise for a range of datasets and model sizes. We then propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any extra training, auxiliary models, or even prior knowledge of the type of noise. Our proposed approach effectively mitigates the loss in performance, recovering a large fraction of the performance drop, sometimes as large as 11 ROUGE-1 points.
Text-guided image editing can have a transformative impact in supporting creative applications. A key challenge is to generate edits that are faithful to input text prompts, while consistent with input images. We present Imagen Editor, a cascaded diffusion model built by fine-tuning Imagen on text-guided image inpainting. Imagen Editor's edits are faithful to the text prompts, which is accomplished by using object detectors to propose inpainting masks during training. In addition, Imagen Editor captures fine details in the input image by conditioning the cascaded pipeline on the original high resolution image. To improve qualitative and quantitative evaluation, we introduce EditBench, a systematic benchmark for text-guided image inpainting. EditBench evaluates inpainting edits on natural and generated images exploring objects, attributes, and scenes. Through extensive human evaluation on EditBench, we find that object-masking during training leads to across-the-board improvements in text-image alignment -- such that Imagen Editor is preferred over DALL-E 2 and Stable Diffusion -- and, as a cohort, these models are better at object-rendering than text-rendering, and handle material/color/size attributes better than count/shape attributes.
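A rough sketch of the object-masking idea (proposing inpainting masks with an off-the-shelf detector), using torchvision's Mask R-CNN; the score threshold, the choice of a single random object, and the detector itself are assumptions for illustration rather than Imagen Editor's actual training pipeline, and a recent torchvision with pretrained weights is assumed.

```python
import random
import torch
import torchvision

# Off-the-shelf instance segmentation model (a stand-in for whatever
# detector the real training pipeline uses).
detector = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def propose_object_mask(image, score_thresh=0.7):
    """Return a binary inpainting mask covering one detected object, or None
    if nothing is detected confidently. `image` is a CxHxW float tensor in [0, 1]."""
    with torch.no_grad():
        pred = detector([image])[0]
    keep = pred["scores"] > score_thresh
    if keep.sum() == 0:
        return None
    masks = pred["masks"][keep]                 # (N, 1, H, W) soft masks
    chosen = masks[random.randrange(masks.shape[0])]
    return (chosen[0] > 0.5).float()            # binary HxW inpainting mask
```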
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
The problem of estimating a linear functional from observational data is canonical in both the causal inference and bandit literatures. We analyze a broad class of two-stage procedures that first estimate the treatment effect function and then use this quantity to estimate the linear functional. We prove non-asymptotic upper bounds on the mean-squared error of such procedures: these bounds reveal that, in order to obtain a non-asymptotically optimal procedure, the error in estimating the treatment effect should be minimized in a certain weighted $L^2$-norm. We analyze a two-stage procedure based on constrained regression in this weighted norm and, via matching non-asymptotic local minimax lower bounds, establish its instance-dependent optimality in finite samples. These results show that the optimal non-asymptotic risk, in addition to depending on the asymptotically efficient variance, depends on the weighted-norm distance between the true outcome function and its approximation by the richest function class supported by the sample size.
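The abstract gives no formulas, so the display below is only a schematic of the two-stage structure it describes, in illustrative notation: a first-stage regression for the treatment effect function under a weighting w, followed by a plug-in estimate of the linear functional defined by a weight function g; the paper's actual constrained formulation and weighted norm are not reproduced here.

```latex
% Schematic only: a generic two-stage plug-in procedure in illustrative notation.
\begin{align*}
  \widehat{\mu} &\in \operatorname*{arg\,min}_{f \in \mathcal{F}}
      \ \widehat{\mathbb{E}}\!\left[\, w(X)\,\bigl(Y - f(X)\bigr)^{2} \right]
      && \text{(stage 1: regression in a weighted } L^2\text{-norm)} \\
  \widehat{\tau} &= \widehat{\mathbb{E}}\!\left[\, g(X)\, \widehat{\mu}(X) \right]
      && \text{(stage 2: plug-in estimate of the linear functional)}
\end{align*}
```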
We present a temporally extended variation of the successor representation, which we term t-SR. t-SR captures the expected state-transition dynamics of temporally extended actions by constructing successor representations over primitive action repeats. This form of temporal abstraction does not learn a top-down hierarchy of relevant task structure, but rather a bottom-up composition of coupled actions and action repetitions. This reduces the number of decisions required for control without learning a hierarchical policy. t-SR therefore directly considers the time horizon of temporally extended action sequences without requiring predefined or domain-specific options. We show that, in environments with dynamic reward structure, t-SR is able to exploit both the flexibility of the successor representation and the abstraction afforded by temporally extended actions. Thus, across a series of sparsely rewarded gridworld environments, t-SR adapts its policies optimally far faster than comparable model-free reinforcement learning methods. We also show that the way t-SR learns to solve these tasks requires the learned policy to be queried consistently less often than non-temporally extended policies.
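A minimal tabular sketch of the underlying construction: a successor representation whose macro-actions are (primitive action, repeat count) pairs; the TD update, the per-macro discounting, and the neglect of intermediate-state occupancy within a macro are simplifications assumed for illustration, not the paper's exact t-SR algorithm.

```python
import numpy as np

class RepeatSR:
    """Tabular successor representation over (action, repeat) macro-actions.
    M[s, m] estimates discounted future state occupancy when macro-action m
    (a primitive action repeated k times) is taken from state s and the
    current policy is followed afterwards."""
    def __init__(self, n_states, n_actions, repeats=(1, 2, 4), gamma=0.95, lr=0.1):
        self.macros = [(a, k) for a in range(n_actions) for k in repeats]
        self.M = np.zeros((n_states, len(self.macros), n_states))
        self.gamma, self.lr = gamma, lr

    def update(self, s, macro_idx, s_next, next_macro_idx, k):
        """One TD update after executing macro-action macro_idx (k primitive
        steps) from s and landing in s_next. Occupancy of the states visited
        inside the macro is ignored here for brevity."""
        onehot = np.zeros(self.M.shape[2])
        onehot[s] = 1.0
        target = onehot + (self.gamma ** k) * self.M[s_next, next_macro_idx]
        self.M[s, macro_idx] += self.lr * (target - self.M[s, macro_idx])

    def q_values(self, s, reward_vector):
        """Macro-action values as the SR times a (possibly changing) state
        reward vector, which is what makes SR methods adapt quickly when
        the reward structure shifts."""
        return self.M[s] @ reward_vector
```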